A growing body of literature has demonstrated the feasibility of using radio frequency (RF) signals to perform key computer vision tasks under occlusion and poor lighting. RF signals traverse walls and occlusions, enabling through-wall pose estimation, action recognition, scene captioning, and human re-identification. However, unlike RGB datasets, which can be labeled by human annotators, labeling RF signals is a difficult task because these signals are not human-interpretable. Yet collecting unlabeled RF signals is easy, so learning useful representations from such unlabeled RF data in an unsupervised manner would be highly beneficial. In this paper, we therefore explore the feasibility of adapting RGB-based unsupervised representation learning to RF signals. We show that although contrastive learning has become the dominant technique for unsupervised representation learning from images and videos, it performs poorly when applied to sensing humans with RF signals. In contrast, predictive unsupervised learning methods learn high-quality representations that transfer to multiple RF-based sensing tasks. Our empirical results show that this approach outperforms state-of-the-art RF-based human sensing on a variety of tasks, opening up the possibility of unsupervised representation learning from this novel modality.
Ghosting artifacts, motion blur, and low fidelity in highlight regions are the main challenges in high dynamic range (HDR) imaging from multiple low dynamic range (LDR) images. These problems stem from using the medium-exposure image as the reference frame in previous methods. To address them, we propose using the under-exposed image as the reference, which avoids these issues. However, the heavy noise in the dark regions of the under-exposed image becomes a new problem. We therefore propose a joint HDR and denoising pipeline containing two sub-networks: (i) a network that adaptively denoises the input LDRs by exploiting exposure priors; (ii) a Pyramid Cascading Fusion Network (PCFNet) that introduces an attention mechanism and a cascading structure in a multi-scale manner. To further leverage both paradigms, we propose a Selective and Joint HDR and Denoising (SJ-HD$^2$R) imaging framework that uses scenario-specific priors for path selection, with an accuracy exceeding 93.3%. We create the first joint HDR and denoising benchmark dataset, which contains a variety of challenging HDR and denoising scenes and supports switching the reference image. Extensive experimental results show that our method achieves superior performance compared to previous methods.
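The path-selection idea above can be illustrated with a minimal sketch. The actual SJ-HD$^2$R selector is learned; here the "scenario prior" is simply the fraction of saturated pixels in the medium-exposure frame (the function name and thresholds are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def select_reference(ldr_under, ldr_mid, sat_thresh=0.95, frac_thresh=0.05):
    """Toy scenario-prior path selection: if the medium-exposure frame has a
    large saturated area (where ghosting and low highlight fidelity arise),
    prefer the under-exposed frame as the HDR reference; otherwise keep the
    medium-exposure reference. Images are floats in [0, 1]."""
    saturated_frac = np.mean(ldr_mid >= sat_thresh)
    if saturated_frac > frac_thresh:
        return "under", ldr_under
    return "mid", ldr_mid
```

A usage example: a frame that is mostly blown out routes to the under-exposed path, while a well-exposed frame keeps the conventional medium-exposure reference.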
Real-world data often exhibits a long-tailed distribution with heavy class imbalance, where the majority classes can dominate the training process and distort the decision boundaries of the minority classes. Recently, researchers have investigated the potential of supervised contrastive learning for long-tailed recognition and demonstrated that it provides strong performance gains. In this paper, we show that while supervised contrastive learning can help improve performance, past baselines suffer from poor uniformity induced by the imbalanced data distribution. This poor uniformity manifests in samples from minority classes occupying sparse regions of the feature space. To address this problem, we propose Targeted Supervised Contrastive Learning (TSC), which improves the uniformity of the feature distribution on the hypersphere. TSC first generates a set of targets uniformly distributed on the hypersphere. It then makes the features of different classes converge to these distinct and uniformly distributed targets during training. This forces all classes, including minority classes, to maintain a uniform distribution in the feature space, improving class boundaries and providing better generalization even in the presence of long-tailed data. Experiments on multiple datasets show that TSC achieves state-of-the-art performance on long-tailed recognition tasks.
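The two TSC steps described above, generating uniform targets on the hypersphere and pulling each class's features toward its target, can be sketched as follows. This is a minimal stand-in (the paper's target generation and full loss differ); targets here are spread by gradient descent on squared pairwise cosine similarities, and the attraction term simply penalizes the cosine distance between a feature and its class target:

```python
import numpy as np

def generate_uniform_targets(num_classes, dim, steps=200, lr=0.1, seed=0):
    """Spread class targets roughly uniformly on the unit hypersphere by
    descending on the sum of squared pairwise cosine similarities."""
    rng = np.random.default_rng(seed)
    t = rng.normal(size=(num_classes, dim))
    t /= np.linalg.norm(t, axis=1, keepdims=True)
    for _ in range(steps):
        sim = t @ t.T                 # pairwise cosine similarities
        np.fill_diagonal(sim, 0.0)
        grad = sim @ t                # gradient (up to a constant) of sum of sim^2
        t -= lr * grad
        t /= np.linalg.norm(t, axis=1, keepdims=True)  # project back to sphere
    return t

def tsc_attraction_loss(features, labels, targets):
    """Pull each L2-normalized feature toward its class target:
    mean of (1 - cos(feature, target_of_its_class))."""
    sims = np.sum(features * targets[labels], axis=1)
    return float(np.mean(1.0 - sims))
```

With 4 classes in 3 dimensions the targets approach a tetrahedral configuration, and a feature that already sits on its class target incurs zero attraction loss.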
Contrastive learning is one of the fastest-growing research areas in machine learning, because it can learn useful representations without labeled data. However, contrastive learning is susceptible to feature suppression: it may discard important information relevant to the task of interest and instead learn irrelevant features. Past work has addressed this limitation through hand-crafted data augmentations that eliminate irrelevant information. This approach, however, does not work for all datasets and tasks. Furthermore, data augmentation fails to resolve feature suppression in multi-attribute classification, where one attribute can suppress features relevant to other attributes. In this paper, we analyze the objective function of contrastive learning and formally prove that it is susceptible to feature suppression. We then present Predictive Contrastive Learning (PCL), a framework for learning unsupervised representations that are robust to feature suppression. The key idea is to force the learned representation to predict the input, thereby preventing it from discarding important information. Extensive experiments verify that PCL is robust to feature suppression and outperforms state-of-the-art contrastive learning methods on a variety of datasets and tasks.
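The key idea, a contrastive objective augmented with a predictive term that forces the representation to retain input information, can be sketched as below. This is a simplified illustration, not PCL's exact formulation: the predictive term is written here as a mean-squared reconstruction error between the input and a prediction produced from the representation:

```python
import numpy as np

def info_nce(z1, z2, temperature=0.5):
    """Standard InfoNCE between two batches of L2-normalized views,
    with positives on the diagonal of the similarity matrix."""
    logits = (z1 @ z2.T) / temperature
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))

def pcl_loss(z1, z2, x, x_hat, alpha=1.0):
    """Contrastive term plus a predictive (reconstruction) term: the
    representation must be able to predict the input x via x_hat, which
    prevents it from discarding task-relevant information."""
    recon = float(np.mean((x - x_hat) ** 2))
    return info_nce(z1, z2) + alpha * recon
```

When the prediction x_hat matches the input exactly, the loss reduces to the plain contrastive term; any discarded information shows up as reconstruction error and is penalized.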
Hindsight Experience Replay (HER) is a common goal-relabeling technique for off-policy deep reinforcement learning algorithms on goal-oriented tasks; it is well suited to robotic manipulation tasks that provide only sparse rewards. In HER, both trajectories and transitions are sampled uniformly for training. However, not all of the agent's experiences contribute equally to training, so naive uniform sampling may lead to inefficient learning. In this paper, we propose diversity-based trajectory and goal selection with HER (DTGSH). First, trajectories are sampled according to the diversity of their goal states, as modeled by determinantal point processes (DPPs). Second, transitions with diverse goal states are selected from the trajectories using k-DPPs. We evaluate DTGSH on five challenging robotic manipulation tasks in simulated robot environments, where we show that our method can learn more quickly and reach higher performance than other state-of-the-art approaches on all tasks.
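The diversity-driven selection above can be sketched with a standard greedy MAP approximation for DPPs (a common substitute for exact k-DPP sampling; the paper's kernel and sampling procedure may differ). Goal states are compared through an RBF similarity kernel, and items are added greedily to maximize the determinant of the selected submatrix, which favors mutually dissimilar goals:

```python
import numpy as np

def rbf_kernel(goals, gamma=1.0):
    """Similarity kernel over goal states; similar goals get entries near 1."""
    d2 = np.sum((goals[:, None, :] - goals[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * d2)

def greedy_k_dpp(L, k):
    """Greedy MAP approximation of k-DPP selection: repeatedly add the item
    that maximizes det(L_S), the volume spanned by the selected subset."""
    selected, remaining = [], list(range(len(L)))
    for _ in range(k):
        best, best_det = None, -np.inf
        for i in remaining:
            idx = selected + [i]
            det = np.linalg.det(L[np.ix_(idx, idx)])
            if det > best_det:
                best, best_det = i, det
        selected.append(best)
        remaining.remove(best)
    return selected
```

Because duplicated goals make the selected submatrix singular (determinant zero), the greedy procedure naturally avoids picking near-identical goal states twice.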
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT exhibits strong robustness even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
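The phrase "encoding the 3D points into multi-modal features" suggests a coordinate-based positional embedding shared across modalities. The sketch below shows one generic way to do this, a sinusoidal embedding of 3D coordinates; it illustrates the idea only and is not CMT's actual encoding (the function name, dimensions, and frequency bands are assumptions):

```python
import numpy as np

def coords_position_embedding(points, dim=6):
    """Sinusoidal embedding of 3D point coordinates (N, 3). Each coordinate
    is expanded with sin/cos at several frequency bands, yielding an
    (N, 3 * dim) embedding that can be added to either modality's tokens
    so alignment happens implicitly, without an explicit view transform."""
    freqs = 2.0 ** np.arange(dim // 2)        # frequency bands
    ang = points[..., None] * freqs           # (N, 3, dim/2)
    emb = np.concatenate([np.sin(ang), np.cos(ang)], axis=-1)  # (N, 3, dim)
    return emb.reshape(points.shape[0], -1)   # (N, 3 * dim)
```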
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
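The simpler of the two attacks, NAIVEATTACK, adds triggers to raw data at the initial distillation phase. A minimal sketch of trigger stamping and poisoning is below; the patch position, size, value, and function names are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def add_trigger(images, trigger_value=1.0, size=2):
    """Stamp a small bright patch into the bottom-right corner of each
    image in an (N, H, W, C) batch; returns a poisoned copy."""
    poisoned = images.copy()
    poisoned[:, -size:, -size:, :] = trigger_value
    return poisoned

def poison_synthetic_set(images, labels, target_class, poison_frac=0.1, seed=0):
    """NAIVEATTACK-style poisoning of a (synthetic) training set: add the
    trigger to a random fraction of samples and relabel them to the
    attacker's target class."""
    rng = np.random.default_rng(seed)
    n = len(images)
    idx = rng.choice(n, size=max(1, int(poison_frac * n)), replace=False)
    images, labels = images.copy(), labels.copy()
    images[idx] = add_trigger(images[idx])
    labels[idx] = target_class
    return images, labels, idx
```

DOORPING differs in that the trigger itself is iteratively optimized throughout the distillation procedure rather than fixed up front.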
Automatic music generation with artificial intelligence typically requires a large amount of data, which is hard to obtain for many less common genres and musical instruments. To tackle this issue, we present ongoing work and preliminary findings on the possibility for deep models to transfer knowledge from language to music, by finetuning large language models pre-trained on a massive text corpus on only hundreds of MIDI files of drum performances. We show that by doing so, one of the largest state-of-the-art models (GPT3) is capable of generating reasonable drum grooves, while models that are not pre-trained (Transformer) show no such ability beyond naive repetition. Evaluating generated music is a challenging task, and evaluating drum grooves, which have little precedent in the literature, is even more so. Hence, we propose a tailored structural evaluation method and analyze drum grooves produced by GPT3 compared to those played by human professionals, exposing the strengths and weaknesses of such generation by language-to-music transfer. Our findings suggest that language-to-music transfer learning with large language models is viable and promising.
Few Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes from only a limited number of support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features based on a Transformer-like framework. Our key insights are twofold: Firstly, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features. Secondly, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, from two aspects: feature-level and instance-level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modification. When benchmarking results on the COCO dataset for the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots, e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few Shot Object Detection. Code and model will be available.
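The first insight, mask-guided dynamic class centers used to re-weight query features, can be sketched with masked average pooling followed by a cosine-similarity weighting. This is a simplified stand-in for RefT's module (shapes and function names are assumptions):

```python
import numpy as np

def masked_class_centers(support_feats, support_masks):
    """Masked average pooling: one dynamic class center per support image.
    support_feats: (N, H, W, C); support_masks: (N, H, W) in {0, 1}."""
    m = support_masks[..., None]                              # (N, H, W, 1)
    centers = (support_feats * m).sum(axis=(1, 2))
    centers /= np.maximum(m.sum(axis=(1, 2)), 1e-6)           # avoid /0 on empty masks
    return centers                                            # (N, C)

def reweight_query(query_feats, center):
    """Re-weight an (H, W, C) query feature map by each location's cosine
    similarity to a class center, highlighting class-relevant regions."""
    q = query_feats / (np.linalg.norm(query_feats, axis=-1, keepdims=True) + 1e-6)
    c = center / (np.linalg.norm(center) + 1e-6)
    weight = (q @ c)[..., None]                               # (H, W, 1) similarity map
    return query_feats * weight
```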
Graph Neural Networks (GNNs) have shown satisfying performance on various graph learning tasks. To achieve better fitting capability, most GNNs have a large number of parameters, which makes them computationally expensive. Therefore, it is difficult to deploy them onto edge devices with scarce computational resources, e.g., mobile phones and wearable smart devices. Knowledge Distillation (KD) is a common solution to compress GNNs, where a lightweight model (i.e., the student model) is encouraged to mimic the behavior of a computationally expensive GNN (i.e., the teacher GNN model). Nevertheless, most existing GNN-based KD methods lack fairness consideration. As a consequence, the student model usually inherits and even exaggerates the bias of the teacher GNN. To handle such a problem, we take initial steps towards fair knowledge distillation for GNNs. Specifically, we first formulate a novel problem of fair knowledge distillation for GNN-based teacher-student frameworks. Then we propose a principled framework named RELIANT to mitigate the bias exhibited by the student model. Notably, the design of RELIANT is decoupled from any specific teacher and student model structures, and thus can be easily adapted to various GNN-based KD frameworks. We perform extensive experiments on multiple real-world datasets, which corroborate that RELIANT achieves less biased GNN knowledge distillation while maintaining high prediction utility.
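To make the problem setting concrete, the sketch below combines a soft-label distillation term with a demographic-parity-style penalty on the gap in mean positive-class probability between two sensitive groups. This is a generic illustration of fairness-aware distillation, not RELIANT's actual (model-agnostic) objective; the function name, penalty form, and hyperparameters are assumptions:

```python
import numpy as np

def kd_fairness_loss(student_logits, teacher_logits, groups, lam=1.0, T=2.0):
    """Soft-label KL distillation plus a fairness penalty: the absolute gap
    between the mean P(class 1) of group 0 and group 1 under the student.
    groups is a 0/1 array of sensitive-attribute memberships per node."""
    def softmax(x, temp):
        z = x / temp
        z -= z.max(axis=1, keepdims=True)          # numerical stability
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kd = float(np.mean(np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=1)))
    pos = softmax(student_logits, 1.0)[:, 1]       # per-node P(class 1)
    gap = abs(float(pos[groups == 0].mean()) - float(pos[groups == 1].mean()))
    return kd + lam * gap
```

A student that matches the teacher and predicts the positive class at equal rates across groups incurs zero loss; group-correlated predictions are penalized by the gap term.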